The proliferation of smartphones has accelerated mobility studies by greatly increasing the variety and volume of available mobility data. One such source is GPS data, which is becoming increasingly common and helps the research community understand people's mobility patterns. However, there is no standardized framework for studying, with machine learning methods, the different mobility patterns created by the non-Work, non-Home locations of Working and Nonworking users on Workdays and Offdays. We propose a new mobility metric, Daily Characteristic Distance, and use it together with Origin-Destination matrix features to generate features for each user. We then apply an unsupervised machine learning method, $k$-means clustering, to those features and obtain three clusters of users for each type of day (Workday and Offday). Finally, we propose two new metrics for analyzing the clustering results, namely User Commonality and Average Frequency. These metrics reveal interesting user behaviors and help us better understand the mobility patterns of the users.
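As a rough illustration of the clustering step, the sketch below standardizes per-user features and runs $k$-means with three clusters; the feature matrix and the Daily Characteristic Distance formulation shown are hypothetical placeholders, not the paper's exact definitions.

```python
# Hypothetical sketch: cluster users into 3 groups from mobility features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def daily_characteristic_distance(stop_coords, ref_point):
    """Illustrative stand-in for the paper's metric: mean distance of a
    day's non-Home/non-Work stops from a reference point (assumption)."""
    return float(np.mean(np.linalg.norm(stop_coords - ref_point, axis=1)))

rng = np.random.default_rng(0)
features = rng.random((500, 6))       # placeholder: one row per user
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))            # cluster sizes
```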
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
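For readers who want to try the released checkpoints, a minimal sketch using the Hugging Face transformers library is shown below; the full 176B model requires hundreds of gigabytes of memory, so the smaller public `bigscience/bloom-560m` variant is used here for illustration.

```python
# Minimal sketch: generate text with a small public BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

ids = tok("A decoder-only Transformer is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```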
Video super-resolution is one of the most popular tasks on mobile devices, widely used to automatically improve low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, showing low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models was evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
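As a schematic example of what an NPU-friendly entry might look like, the sketch below builds a tiny 4X upscaler around a depth-to-space layer and exports it to TensorFlow Lite; it is not any participant's actual solution, and the input resolution is an assumption.

```python
# Schematic sketch: tiny 4x video-frame upscaler exported to TFLite.
import tensorflow as tf

inp = tf.keras.Input(shape=(180, 320, 3))                  # low-res frame
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(3 * 16, 3, padding="same")(x)   # 3 * 4^2 channels
out = tf.nn.depth_to_space(x, block_size=4)                # 4x upscaling
model = tf.keras.Model(inp, out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("vsr_sketch.tflite", "wb") as f:
    f.write(converter.convert())
```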
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
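The INT8 requirement is typically met with post-training quantization; a hedged sketch of the TensorFlow Lite conversion path is given below, with a toy model and random stand-ins for real DIV2K representative crops.

```python
# Hedged sketch: INT8 post-training quantization of a toy 3x SR model.
import numpy as np
import tensorflow as tf

inp = tf.keras.Input(shape=(360, 640, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(3 * 9, 3, padding="same")(x)    # 3 * 3^2 channels
out = tf.nn.depth_to_space(x, block_size=3)                # 3x upscaling
model = tf.keras.Model(inp, out)

def representative_data():                 # replace with real DIV2K crops
    for _ in range(100):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("sr_int8_sketch.tflite", "wb") as f:
    f.write(converter.convert())
```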
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISPs and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with the large-scale Fujifilm UltraISP dataset consisting of thousands of photo pairs captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
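A learned ISP of this kind typically maps a packed RAW Bayer mosaic to an RGB image; the minimal sketch below only illustrates the input/output shape convention (4 half-resolution Bayer planes in, full-resolution RGB out) and is far simpler than the actual challenge entries.

```python
# Minimal sketch: learned ISP mapping packed RAW Bayer planes to RGB.
import tensorflow as tf

raw = tf.keras.Input(shape=(544, 960, 4))     # R, G, G, B half-res planes
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(raw)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2D(3 * 4, 3, padding="same")(x)    # 3 * 2^2 channels
rgb = tf.nn.depth_to_space(x, block_size=2)   # back to full resolution
isp = tf.keras.Model(raw, rgb)
isp.summary()
```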
The rapid growth and deployment of deep learning (DL) has brought emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been discussed as a way to enable privacy-preserving DL computation. In practice, however, MPC often comes at very high computation and communication overhead, which may prohibit its popularity in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of the MPC comparison protocol, and hardware acceleration. Yet existing approaches either achieve a low reduction ratio, and thus suffer from high latency due to limited computation and communication savings, or are power-hungry, as prior work mainly targets general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop PolyMPCNet, a systematic framework for joint overhead reduction of the MPC comparison protocol and hardware acceleration, which integrates the hardware latency of the cryptographic building blocks into the DNN loss function to achieve high energy efficiency, accuracy, and security guarantees. Rather than checking model sensitivity after a DNN is well trained (by replacing or removing certain non-polynomial operators), our key design principle is to enforce exactly what is assumed in the DNN design: to train a DNN that is both hardware-efficient and secure, while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a cryptography- and hardware-friendly trainable polynomial activation function, initialized directly via a polynomial activation initialization method, to replace the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and a corresponding performance model for field-programmable gate array (FPGA) platforms.
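As a concrete illustration of the kind of operator involved, the sketch below implements a trainable second-degree polynomial activation in PyTorch; the coefficient initialization shown is a common ReLU-approximating choice and is an assumption, not the paper's exact initialization scheme.

```python
# Sketch: trainable polynomial activation as an MPC-friendly ReLU substitute.
import torch
import torch.nn as nn

class PolyAct(nn.Module):
    """y = a*x^2 + b*x + c with learnable scalar coefficients."""
    def __init__(self, a=0.25, b=0.5, c=0.0):  # x^2/4 + x/2 roughly mimics ReLU
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a))
        self.b = nn.Parameter(torch.tensor(b))
        self.c = nn.Parameter(torch.tensor(c))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

layer = nn.Sequential(nn.Linear(128, 128), PolyAct())
out = layer(torch.randn(4, 128))
```

Because the activation is a plain polynomial, it avoids the secure comparison that makes 2P-ReLU expensive under MPC; only additions and multiplications remain.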
Semi-supervised learning (SSL) improves model generalization by leveraging large amounts of unlabeled data to augment limited labeled samples. However, popular SSL evaluation protocols are currently often constrained to computer vision (CV) tasks. In addition, previous work typically trains deep neural networks from scratch, which is time-consuming and environmentally unfriendly. To address these issues, we construct a Unified SSL Benchmark (USB) by selecting 15 diverse, challenging, and comprehensive tasks from CV, natural language processing (NLP), and audio processing (Audio), on which we systematically evaluate the dominant SSL methods, and we open-source a modular and extensible codebase for fair evaluation of these methods. We further provide pre-trained versions of state-of-the-art neural models for the CV tasks to make the cost of further tuning affordable. USB enables evaluating a single SSL algorithm on more tasks from multiple domains at lower cost. Specifically, on a single NVIDIA V100, only 37 GPU days are needed to evaluate FixMatch on the 15 tasks in USB, whereas 335 GPU days (279 GPU days on the 4 CV datasets other than ImageNet) are needed for 5 CV tasks under the typical protocol.
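For context, the unlabeled-data objective of FixMatch, one of the SSL methods USB evaluates, can be sketched as below; `model`, the weak/strong augmentations, and the confidence threshold are placeholders rather than the benchmark's actual code.

```python
# Condensed sketch of the FixMatch unlabeled loss (illustrative only).
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, tau=0.95):
    """x_weak/x_strong: weakly/strongly augmented views of the same batch."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=-1)     # pseudo-label guesses
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= tau).float()                 # keep confident samples
    logits_s = model(x_strong)
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")
    return (loss * mask).mean()
```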
Face anti-spoofing (FAS) methods based on unsupervised domain adaptation (UDA) have drawn growing attention due to their promising performance in target scenarios. Most existing UDA FAS methods fit the trained models to the target domain by aligning the distributions of semantic high-level features. However, insufficient supervision of the unlabeled target domain and neglect of low-level feature alignment degrade the performance of existing methods. To address these issues, we propose a novel perspective on UDA FAS that directly fits the target data to the models, i.e., stylizes the target data to the source-domain style via image translation and feeds the stylized data into the well-trained source model for classification. The proposed Generative Domain Adaptation (GDA) framework combines two carefully designed consistency constraints: 1) inter-domain neural statistic consistency guides the generator in narrowing the inter-domain gap; 2) dual-level semantic consistency ensures the semantic quality of the stylized images. In addition, we propose intra-domain spectrum mixup to further expand the target data distribution, ensuring generalization and reducing the intra-domain gap. Extensive experiments and visualizations demonstrate the effectiveness of our method against the state-of-the-art methods.
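To make the first constraint more tangible, the sketch below shows one simple way to re-normalize target feature maps to source-domain channel statistics (an AdaIN-style operation); the paper's generator and its actual consistency losses are not reproduced here.

```python
# Rough sketch: match target feature statistics to the source domain.
import torch

def stylize_to_source_stats(feat_t, feat_s, eps=1e-5):
    """Re-normalize target features (B,C,H,W) to per-channel source stats."""
    mu_t = feat_t.mean(dim=(2, 3), keepdim=True)
    sd_t = feat_t.std(dim=(2, 3), keepdim=True) + eps
    mu_s = feat_s.mean(dim=(0, 2, 3), keepdim=True)   # source batch stats
    sd_s = feat_s.std(dim=(0, 2, 3), keepdim=True) + eps
    return (feat_t - mu_t) / sd_t * sd_s + mu_s

feat_t = torch.randn(8, 64, 32, 32)   # placeholder target features
feat_s = torch.randn(8, 64, 32, 32)   # placeholder source features
styled = stylize_to_source_stats(feat_t, feat_s)
```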
As diverse face presentation attacks continue to emerge, face anti-spoofing (FAS) methods based on domain generalization (DG) have drawn growing attention. Existing DG-based FAS methods always capture domain-invariant features for generalizing to the various unseen domains. However, they neglect the discriminative characteristics of individual source domains and the diverse domain-specific information of different domains, so the trained model is not sufficiently adaptive to various unseen domains. To address this issue, we propose an Adaptive Mixture of Experts Learning (AMEL) framework, which exploits domain-specific information to adaptively establish links between the seen source domains and unseen target domains and thereby further improve generalization. Concretely, Domain-Specific Experts (DSE) are designed to investigate discriminative and unique domain-specific features as a complement to the common domain-invariant features. Moreover, Dynamic Expert Aggregation (DEA) is proposed to adaptively aggregate the complementary information of each source expert according to its domain relevance to the unseen target domain. Combined with meta-learning, these modules work cooperatively to adaptively aggregate meaningful domain-specific information for the various unseen target domains. Extensive experiments and visualizations demonstrate the effectiveness of our method against state-of-the-art competitors.
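The expert-aggregation idea can be pictured as several domain-specific heads combined with input-dependent weights, as in the schematic sketch below; all module names and sizes here are illustrative, not the paper's architecture.

```python
# Schematic sketch: domain-specific experts with dynamic aggregation weights.
import torch
import torch.nn as nn

class ExpertAggregation(nn.Module):
    def __init__(self, feat_dim=256, n_experts=3, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_experts)])
        self.gate = nn.Linear(feat_dim, n_experts)  # relevance per source domain

    def forward(self, feat):
        w = torch.softmax(self.gate(feat), dim=-1)               # (B, E)
        outs = torch.stack([e(feat) for e in self.experts], 1)   # (B, E, C)
        return (w.unsqueeze(-1) * outs).sum(dim=1)               # weighted sum

head = ExpertAggregation()
logits = head(torch.randn(4, 256))   # -> (4, 2)
```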
IoT devices increasingly implement neural network models to enable smart applications. Energy harvesting (EH) technology, which collects energy from the ambient environment, is a promising alternative to batteries for powering these devices, thanks to lower maintenance costs and the wide availability of energy sources. However, the power provided by energy harvesters is low and inherently unstable, since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (AutoML) co-exploration framework that searches for desired multi-models with shared weights for energy-harvesting IoT devices. The shared models significantly reduce the memory footprint and offer different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An efficient on-device implementation architecture is further developed to execute each model efficiently. A run-time model extraction algorithm is proposed that retrieves an individual model with negligible overhead when a specific model mode is triggered. Experimental results show that the neural network models generated by EVE are on average 2.5x faster than baseline models without pruning and weight sharing.
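The run-time extraction step can be pictured as applying a stored sparsity mask to the shared weights; the toy sketch below illustrates that idea only, with the mask contents and the AutoML search itself left out.

```python
# Toy sketch: extract a weight-shared sub-model by applying sparsity masks.
import copy
import torch
import torch.nn as nn

def extract_submodel(shared: nn.Module, masks: dict) -> nn.Module:
    """Copy the shared model and zero out weights pruned for this mode."""
    sub = copy.deepcopy(shared)
    with torch.no_grad():
        for name, p in sub.named_parameters():
            if name in masks:
                p.mul_(masks[name])          # keep only unpruned weights
    return sub

shared = nn.Linear(8, 4)
masks = {"weight": (torch.rand(4, 8) > 0.5).float()}   # illustrative mask
low_power_model = extract_submodel(shared, masks)
```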